Dr. Timnit Gebru

Key Takeaways from Dr. Gebru’s Middlebury Presentation
Author

Anweshan Adhikari

Introduction to Dr. Timnit Gebru

Named one of Time’s most influential people of 2022, Dr. Timnit Gebru is an Ethiopian-born computer scientist who has made significant contributions in advocating for diversity in technology, especially in the realm of ethical artificial intelligence. Gebru was born in Ethiopia to an engineer and an economist. After receiving political asylum in the US, an experience she remembers as miserable, Gebru encountered many instances of systemic racism. Early in her career, she worked as an audio engineer at Apple, where she developed signal processing algorithms for the first iPad. Despite her major contributions at Apple, Gebru spoke out against the company during the #AppleToo movement started by Apple employees, calling out discrimination and disparities in areas like pay equity and accountability across Apple’s leadership. During her time at Google, Gebru became the subject of major controversy in the news media and the technology world after her contract was terminated for refusing to withdraw an unpublished paper on the risks of large language models. Today, Dr. Timnit Gebru stands tall as a prominent scientist in the tech industry, fearlessly advocating for the elimination of bias in AI and for the rights of marginalized groups impacted by AI.

On April 24, Dr. Gebru will be giving a lecture on the impact of artificial intelligence at Middlebury College. Dr. Gebru will also be visiting our Machine Learning class to engage with students and answer our questions.

Summary of Dr. Gebru’s Lecture

In 2020, as part of a tutorial on Fairness, Accountability, Transparency, and Ethics (FATE) in Computer Vision at the Conference on Computer Vision and Pattern Recognition (CVPR), Dr. Timnit Gebru highlighted severe issues related to AI, specifically in computer vision. She emphasized the need to address the risks of facial recognition systems not only in terms of how the technology is developed but also in how it is deployed. Throughout her lecture, Dr. Gebru used examples such as the Maryland police’s use of biased facial recognition software and Amazon promoting this software for uses it was never intended for.

Dr. Gebru stressed that eliminating bias in facial recognition systems does not simply mean diversifying a dataset. She highlighted that ethical and social considerations also need to be taken into account. Gebru pointed to the example of transgender women’s YouTube profiles being used to train facial recognition systems, which raises ethical concerns about the use of personal data.

Dr. Gebru also emphasized that acknowledging the presence of bias does not necessarily eradicate it from a system. In some cases, making a system fair does not mean making it work equally well for everyone, as the fundamental concept of the system itself could be flawed. She gave examples of how flawed tools like facial recognition systems have harmed innocent people and have even been used as evidence in court cases.

Pointing to companies like Amazon that have failed to ensure their facial recognition technology is not biased, Dr. Gebru raised the question of who is developing and promoting these technologies. I personally feel that companies should not promote a product knowing full well that it is defective. Amazon marketing its biased facial recognition software to police departments is like selling a faulty compass to a lost hiker: it only leads them further astray. Dr. Gebru also highlighted that many ethics boards at companies working on AI represent only the most dominant and powerful groups, leaving those on the receiving end of these technologies’ harms out of the conversation.

Gebru emphasized that fairness is not only about datasets or mathematics but also about society and its ethics. It is important to understand how these technologies are being developed and how they are being deployed. People working in computer vision need to understand the dangerous ways their work is being used and take responsibility for ensuring it is deployed ethically.

Questions

  1. Earlier in your career, you wondered how tech giants like Apple managed to avoid scrutiny for their failure to address and eradicate unfairness within their organizations. Despite occasional concerted efforts from employees and other stakeholders, such as workers’ unions, it appears that little has improved within the technology industry. What factors, do you think, are allowing tech companies to avoid accountability and sidestep these issues?

  2. If the tech industry continues to develop AI in the same manner as it does today, do you think it would be better to stop this technology completely, or do the benefits of its existence outweigh the concerns surrounding it?

  3. Today, AI-related projects are growing at a potentially exponential rate, and given the concerns about misuse, what are the most urgent steps that companies, and also their users, need to take to ensure AI is developed and used in a fair manner?

  4. The capabilities of AI that we see today are far different from its capabilities when it was first introduced. Moreover, AI is a technology built on decades of prior study. How far would we have to go to completely eliminate any form of bias in today’s AI?

Reflection on Dr. Gebru’s Talk

Dr. Gebru’s talk was built on the pillars of her organization, the Distributed AI Research Institute (DAIR), which centers on understanding and mitigating the harms of current AI systems and imagining a different technological future. Her talk featured quotes from prominent figures in the AI world such as Sam Altman and Karen Hao. Dr. Gebru shed some much-needed light on OpenAI’s and other AI companies’ unethical practices, such as exploiting cheap labor from poorer nations and taking content from artists to train their models without any permission or compensation. Although I had acquired a good understanding of these topics by following renowned experts on AI bias on Twitter, Dr. Gebru introduced me to a whole new range of topics, such as eugenics, transhumanism, and effective altruism, that I had previously never looked into.

Dr. Gebru’s perspective that AI and its development are rooted in the 20th-century Anglo-American eugenics tradition is certainly an interesting viewpoint. Based on my understanding, Dr. Gebru stated that the combination of eugenics and cosmism, centered on sentient AI, would introduce a new phase in the evolution of the human species. However, I was left uncertain as to why eugenics would be necessary in this scenario, given that powerful technologies should theoretically be able to adapt to all individuals without the need for policies aiming to improve human stock or restrict miscegenation. The conversation about the TESCREAL bundle left me with lingering questions; however, it also opened up a world of new conversations and topics that I never knew had connections to AI.

Personally, I found the conversation about Utopia, and who this Utopia is being created for, thought-provoking. The prospect of gaining wealth and intelligence through AI is very enticing for any individual. However, Dr. Gebru raised a great question: what if the benefits of these technologies eventually end up limited to the wealthiest 1% of the world’s population, who, in my view, are doing well without the existence of AI? Prior to attending the lecture, I had never considered the potential consequences of contributing to training a model, but I definitely do now.

I found it insightful when Dr. Gebru raised the history of companies evading accountability and the ease with which companies developing AGIs will be able to evade accountability for the bias in their models. I agree with her view that anthropomorphizing artifacts allows builders to evade responsibility, and that policies should be put in place well ahead of time. Dr. Gebru’s experiences with sexism and harassment in the AI industry, such as facing resistance when appealing for the renaming of a conference that shared its name with adult websites, are troubling and highlight the urgent need for greater inclusion and accountability in the field.

In addition, the lack of diversity within the AI industry is a cause for concern. When models are developed exclusively by individuals of a particular gender, race, or other demographic, the developers may not prioritize checking for biases that affect other groups, which could lead to detrimental outcomes. I also believe there have to be limitations on how AI models obtain their training data and on who can build these models. I remember when DALL-E was first introduced, it caused major controversy over its failure to remove signatures and watermarks from the artworks it was drawing on to generate new images. Such instances have the potential to diminish the work of artists who have invested significant resources into creating their own original artworks.

Dr. Gebru’s example of the translation model developed by Ghana NLP, which outperformed Microsoft’s model in terms of BLEU score, highlights the need for increased awareness of smaller companies and organizations working toward advancements in the AI industry. It is of utmost importance to understand that larger tech companies such as Microsoft hold significant power and influence within the industry, which could crowd smaller entities out of existence entirely.

Conclusion

Dr. Timnit Gebru has been a prominent voice in raising awareness of ethical AI development. The knowledge she shared in the lecture must be the culmination of her years of experience in the industry. While individuals may hold varying perspectives on topics such as altruism and eugenics, it should not be acceptable to tolerate the development of biased models trained using unethical practices. The fact that such models have the potential to overshadow entire industries is an even greater cause for concern. I believe there is a need for organizations like the Future of Life Institute to exist and to have more governing power over who gets to develop AI models and over the core procedures and tests that all organizations must follow to ensure that their models are unbiased and ethical.

AI has the potential to solve many modern problems. Dr. Gebru gave examples of work done at NYU Abu Dhabi on detecting images of bombs and of work in Costa Rica where AI is used to recognize plants and improve agriculture. However, a technology this powerful also has, and will continue to have, major shortcomings. Hence, AI technologies must be regulated, and it is crucial to encourage independent researchers like Dr. Timnit Gebru, as they play a pivotal role in ensuring that the AI models the world will rely on in the future are developed in an unbiased and ethical manner.